Vote-boosting ensembles
Authors
Abstract
Vote-boosting is a sequential ensemble learning method in which the individual classifiers are built on different weighted versions of the training data. To build a new classifier, the weight of each training instance is set as a function of the disagreement rate of the current ensemble's predictions for that instance. Experiments with the symmetric beta distribution as the emphasis function and different base learners illustrate the properties and the performance of these ensembles. In classification problems with low or no class-label noise, and with simple base learners, vote-boosting interpolates between bagging and standard boosting (e.g., AdaBoost), depending on the value of the shape parameter of the beta distribution. In terms of predictive accuracy, the best results, comparable to or better than those of random forests, are obtained with vote-boosting ensembles of random trees.
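A minimal sketch of the scheme the abstract describes, assuming binary labels in {-1, +1}, hand-rolled decision stumps as base learners, and an unnormalized symmetric beta density as the emphasis function. The names (`vote_boost`, `beta_emphasis`, `fit_stump`) are illustrative, not taken from the paper's code:

```python
import numpy as np

def beta_emphasis(p, a):
    # Unnormalized symmetric Beta(a, a) density at p in [0, 1].
    # a > 1 peaks at p = 0.5 (maximum vote disagreement); a = 1 is
    # uniform, which makes the weighting behave like bagging.
    return np.power(p, a - 1.0) * np.power(1.0 - p, a - 1.0)

def fit_stump(X, y, w):
    # Weighted decision stump: best (feature, threshold, polarity).
    best = (np.inf, 0, 0.0, 1)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(X[:, j] <= thr, -pol, pol)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, j, thr, pol)
    _, j, thr, pol = best
    return lambda A, j=j, thr=thr, pol=pol: np.where(A[:, j] <= thr, -pol, pol)

def vote_boost(X, y, n_rounds=10, a=2.0):
    # Each round, instance weights depend only on how evenly the current
    # ensemble's votes are split for that instance -- not on its label.
    n = len(y)
    w = np.full(n, 1.0 / n)               # uniform start
    stumps, pos_votes = [], np.zeros(n)
    for t in range(1, n_rounds + 1):
        stump = fit_stump(X, y, w)
        stumps.append(stump)
        pos_votes += (stump(X) == 1)
        p = pos_votes / t                 # fraction of +1 votes per instance
        w = np.clip(beta_emphasis(p, a), 1e-12, None)
        w /= w.sum()
    return stumps

def predict(stumps, X):
    # Unweighted majority vote over the ensemble.
    return np.sign(sum(s(X) for s in stumps))
```

With `a = 1` the emphasis is flat and every round resembles bagging on uniform weights; larger `a` concentrates weight on instances whose votes are split, the interpolation the abstract refers to.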
Similar Resources
‘Fuzzy’ vs ‘Non-fuzzy’ in Combining Classifiers Designed by Boosting
Boosting is recognized as one of the most successful techniques for generating classifier ensembles. Typically, the classifier outputs are combined by the weighted majority vote. The purpose of this study is to demonstrate the advantages of some fuzzy combination methods for ensembles of classifiers designed by Boosting. We ran 2-fold cross-validation experiments on 6 benchmark data sets to com...
Examining the Relationship Between Majority Vote Accuracy and Diversity in Bagging and Boosting
Much current research is undertaken into combining classifiers to increase the classification accuracy. We show, by means of an enumerative example, how combining classifiers can lead to much greater or lesser accuracy than each individual classifier. Measures of diversity among the classifiers taken from the literature are shown to only exhibit a weak relationship with majority vote accuracy. ...
Boosting in the Limit: Maximizing the Margin of Learned Ensembles
The “minimum margin” of an ensemble classifier on a given training set is, roughly speaking, the smallest vote it gives to any correct training label. Recent work has shown that the Adaboost algorithm is particularly effective at producing ensembles with large minimum margins, and theory suggests that this may account for its success at reducing generalization error. We note, however, that the ...
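The minimum margin described above can be computed directly from an ensemble's votes on the training set. A small illustrative helper, assuming an unweighted majority vote (the function name and layout are my own):

```python
import numpy as np

def minimum_margin(vote_matrix, y):
    # vote_matrix: (n_classifiers, n_samples) array of predicted labels.
    # Margin of an instance = (votes for its correct label minus the
    # largest vote count for any wrong label) / total votes.
    T, n = vote_matrix.shape
    margins = np.empty(n)
    for i in range(n):
        votes = vote_matrix[:, i]
        correct = np.sum(votes == y[i])
        wrong_votes = votes[votes != y[i]]
        _, counts = np.unique(wrong_votes, return_counts=True)
        wrong = counts.max() if counts.size else 0
        margins[i] = (correct - wrong) / T
    return margins.min()
```

A positive minimum margin means every training instance is classified correctly by the vote; AdaBoost's tendency to push this quantity up is the effect the abstract refers to.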
Random Ordinality Ensembles: A Novel Ensemble Method for Multi-valued Categorical Data
Data with multi-valued categorical attributes can cause major problems for decision trees. The high branching factor can lead to data fragmentation, where decisions have little or no statistical support. In this paper, we propose a new ensemble method, Random Ordinality Ensembles (ROE), that circumvents this problem, and provides significantly improved accuracies over other popular ensemble met...
Journal: CoRR
Volume: abs/1606.09458
Pages: -
Publication year: 2016